20 research outputs found

    Theory and practice of the ternary relations model of information management

    This thesis proposes a new, highly generalised and fundamental information-modelling framework called the Ternary Relations Model (TRM). The TRM was designed as a model for converging a number of differing, and in some cases quite isolated, paradigms of information management, including hypertext navigation, relational databases, semi-structured databases, the Semantic Web, ZigZag and workflow modelling. While many related works model a link as a connection between two ends, the TRM adds a third element, thereby enriching links with associative meanings. The TRM is a formal description of a technique that establishes bi-directional and dynamic node-link structures in which each link is itself an ordered triple of other nodes. The key features that make the TRM distinct from other triple-based models (such as RDF) are the integration of bi-directionality, functional links, and simplicity in its definition and element hierarchy. There are two useful applications of the TRM. Firstly, it may be used as a tool for the analysis of information models, to elucidate connections and parallels. Secondly, it may be used as a “construction kit” to build new paradigms and/or applications in information management. The TRM may be used to provide a substrate for building diverse systems, such as adaptive hypertext, schemaless databases, query languages, hyperlink models and workflow management systems. It is, however, highly generalised and is by no means limited to these purposes.
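
    A minimal sketch, in Python, of how such a triple-based, bi-directional node-link store might be realised is given below. The class and method names are illustrative assumptions rather than the thesis's own API; the point is that each link is an ordered triple and that every element of a triple is indexed, so structures can be traversed from either end of a link, or from its associative element.

    from collections import defaultdict

    class TRMStore:
        def __init__(self):
            self.links = set()             # all (source, relation, target) triples
            self.index = defaultdict(set)  # node -> triples it participates in

        def add_link(self, source, relation, target):
            triple = (source, relation, target)
            self.links.add(triple)
            for node in triple:                 # index every position, so lookups
                self.index[node].add(triple)    # work backwards as well as forwards

        def links_of(self, node):
            return self.index[node]

    # Usage: navigate from either end of a link, or from the relation itself.
    store = TRMStore()
    store.add_link("alice", "authored", "thesis")
    print(store.links_of("thesis"))        # reachable from the target end too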

    A new dynamic approach for non-singleton fuzzification in noisy time-series prediction

    Non-singleton fuzzification is used to model uncertain (e.g. noisy) inputs within fuzzy logic systems. In the standard approach, assuming the fuzzification type is known, the observed (noisy) input is usually taken as the core of the input fuzzy set, typically the centre of its membership function. This paper proposes a new fuzzification method (not type) in which the core of an input fuzzy set is not necessarily located at the observed input; rather, it is dynamically adjusted based on statistical methods. Using a weighted moving average, a few past samples are aggregated to roughly estimate where the input fuzzy set should be located. The added complexity is modest, yet applying this method to the well-known Mackey-Glass and Lorenz time-series prediction problems shows significant error reduction when the input is corrupted by different levels of noise.
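
    A minimal sketch of the idea, assuming a Gaussian input fuzzy set and a linearly weighted moving average over a short window, is given below; the window size, weights and sigma are illustrative choices, not the paper's parameters.

    import numpy as np

    def dynamic_core(samples, window=5):
        recent = np.asarray(samples[-window:], dtype=float)
        weights = np.arange(1, len(recent) + 1)    # newer samples weigh more
        return float(np.dot(recent, weights) / weights.sum())

    def input_membership(x, samples, sigma=0.1):
        core = dynamic_core(samples)               # core shifted off the raw input
        return np.exp(-0.5 * ((x - core) / sigma) ** 2)

    samples = [1.02, 0.97, 1.05, 0.99, 1.20]       # last reading is a noise spike
    print(dynamic_core(samples))                   # core pulled back toward the trend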

    A similarity-based inference engine for non-singleton fuzzy logic systems

    In non-singleton fuzzy logic systems (NSFLSs), input uncertainty such as sensor noise is modelled with input fuzzy sets. The performance of NSFLSs in handling such uncertainty depends both on the actual input fuzzy sets (and their inherent model of uncertainty) and on the way they affect the inference process. This paper proposes a novel type of NSFLS by replacing the composition-based inference method of type-1 fuzzy relations with a similarity-based inference method that makes NSFLSs more sensitive to changes in the input's uncertainty characteristics. The proposed approach uses the Jaccard ratio to measure the similarity between input and antecedent fuzzy sets, then uses the measured similarity to determine the firing strength of each individual fuzzy rule. The standard and novel approaches to NSFLSs are experimentally compared on the well-known Mackey-Glass time-series prediction problem, where the NSFLS's inputs have been perturbed with different levels of Gaussian noise. The experiments are repeated for system training under both noisy and noise-free conditions. Analysis of the results shows that the new method outperforms the standard approach by substantially reducing the prediction errors.
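
    A minimal sketch of the similarity-based firing computation, assuming discretised type-1 fuzzy sets sampled on a shared domain, is given below. The Jaccard ratio sum(min)/sum(max) replaces the usual composition step; the Gaussian sets and their parameters are illustrative.

    import numpy as np

    def jaccard(a, b):
        # a, b: membership grades of two fuzzy sets on the same grid
        return np.minimum(a, b).sum() / np.maximum(a, b).sum()

    x = np.linspace(0.0, 2.0, 201)
    gauss = lambda c, s: np.exp(-0.5 * ((x - c) / s) ** 2)

    input_set = gauss(1.0, 0.15)   # non-singleton (noisy) input fuzzy set
    antecedent = gauss(1.1, 0.20)  # a rule's antecedent fuzzy set
    print(jaccard(input_set, antecedent))   # firing strength of that rule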

    Contrasting singleton type-1 and interval type-2 non-singleton type-1 fuzzy logic systems

    Most applications of both type-1 and type-2 fuzzy logic systems employ singleton fuzzification due to its simplicity and low computational cost. However, singleton fuzzification assumes that the input data (i.e., measurements) are precise, with no uncertainty associated with them. This paper explores the potential of combining the uncertainty-modelling capacity of interval type-2 fuzzy sets with the simplicity of type-1 fuzzy logic systems (FLSs) by using interval type-2 fuzzy sets solely in the non-singleton input fuzzifier. The paper builds on previous work and uses the methodological design of the footprint of uncertainty (FOU) of interval type-2 fuzzy sets for given levels of uncertainty. We provide a detailed investigation into the ability of both types of fuzzy sets (type-1 and interval type-2) to capture and model different levels of uncertainty/noise by varying the size of the FOU of the underlying input fuzzy sets, from type-1 fuzzy sets to very “wide” interval type-2 fuzzy sets, as part of type-1 non-singleton FLSs with interval type-2 input fuzzy sets. Applying the study in the context of chaotic time-series prediction, we show how, as uncertainty/noise increases, interval type-2 input fuzzy sets with FOUs of increasing size become more and more viable.
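
    A minimal sketch of such an interval type-2 input fuzzy set, assuming a Gaussian primary membership function with an uncertain standard deviation so that the FOU is bounded by a lower and an upper membership function, is given below; the specific parameter values are illustrative, and widening the gap between the two sigmas grows the FOU.

    import numpy as np

    def it2_gaussian(x, centre, sigma_lower, sigma_upper):
        lmf = np.exp(-0.5 * ((x - centre) / sigma_lower) ** 2)  # lower membership
        umf = np.exp(-0.5 * ((x - centre) / sigma_upper) ** 2)  # upper membership
        return lmf, umf

    x = np.linspace(-1.0, 1.0, 201)
    lmf, umf = it2_gaussian(x, 0.0, 0.10, 0.30)   # wide FOU for high noise
    print(float((umf - lmf).max()))               # rough measure of FOU size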

    Interpretability and complexity of design in the creation of fuzzy logic systems: a user study

    In recent years, researchers have become increasingly interested in designing interpretable Fuzzy Logic Systems (FLSs). Many studies have claimed that reducing the complexity of FLSs can lead to improved model interpretability; that is, reducing the number of rules tends to reduce the complexity of FLSs, thus improving their interpretability. However, none of these studies have considered interpretability and complexity from a human perspective. Since interpretability is subjective in nature, it is essential to see how people perceive interpretability and complexity, particularly in relation to creating FLSs. In this paper we investigate this issue through an initial user study, the first time a user study has been used to assess the interpretability and complexity of designs in relation to creating FLSs. The user study involved a range of expert practitioners in FLSs and received a diverse set of answers. We are interested in whether, from people's perspectives, FLSs are necessarily more interpretable when their designs are less complex. Although the initial user study is based on a small sample (25 participants), it provides initial insight into this issue and motivates our future research.

    Quality assessment of OpenStreetMap data using trajectory mining

    OpenStreetMap (OSM) data are widely used, but their reliability is still variable. Many contributors to OSM have not been trained in geography or surveying, and consequently their contributions, including geometry and attribute inserts, deletions and updates, can be inaccurate, incomplete, inconsistent or vague. There are some mechanisms and applications dedicated to discovering bugs and errors in OSM data. Such systems can remove errors through user checks and the application of predefined rules, but they need an extra control process to check the real-world validity of suspected errors and bugs. This paper focuses on finding bugs and errors based on patterns and rules extracted from users' tracking data. The underlying idea is that certain characteristics of user trajectories are directly linked to the type of mapped feature. Using such rules, sets of potential bugs and errors can be identified and stored for further investigation.
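
    A minimal sketch of one such rule, assuming trajectories have been map-matched to OSM ways and that observed speeds can be compared against the tagged highway type, is given below; the tags, thresholds and mismatch rule are illustrative assumptions, not the paper's actual rule set.

    # Typical upper speeds (km/h) one might expect for a few highway tags.
    EXPECTED_MAX_SPEED = {"footway": 10, "residential": 60, "motorway": 140}

    def flag_suspect_ways(observations):
        # observations: list of (way_id, highway_tag, observed_speed_kmh)
        suspects = []
        for way_id, tag, speed in observations:
            limit = EXPECTED_MAX_SPEED.get(tag)
            if limit is not None and speed > limit:
                suspects.append((way_id, tag, speed))  # keep for manual review
        return suspects

    obs = [("w1", "footway", 55.0), ("w2", "motorway", 110.0)]
    print(flag_suspect_ways(obs))   # w1 looks mistagged: too fast for a footway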